Removed the hardcoded Ollama Models and now works with any Ollama Model #385
base: main
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Replaces the hardcoded Ollama model ("llama3.2") with dynamic resolution (env var `OLLAMA_MODEL` → `~/.cortex/config.json` → fallback "llama3.2"), adds interactive model selection in the first-run wizard, and updates tests to use the environment/config-driven Ollama model selection.

Changes
Sequence Diagram(s): omitted (changes are localized to model selection and interactive setup and do not introduce a multi-component sequential flow requiring visualization)

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cortex/ask.py
- cortex/first_run_wizard.py
- tests/test_ask.py
- tests/test_cli.py
- tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/first_run_wizard.py
- tests/test_ask.py
- tests/test_cli.py
- cortex/ask.py
- tests/test_ollama_integration.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
- tests/test_ask.py
- tests/test_cli.py
- tests/test_ollama_integration.py
🧬 Code graph analysis (1)
tests/test_ollama_integration.py (1)
scripts/setup_ollama.py (1)
test_model (315-343)
🪛 GitHub Actions: CI
cortex/first_run_wizard.py
[error] 390-390: W293 Blank line contains whitespace
🪛 GitHub Check: lint
cortex/first_run_wizard.py
[failure] 390-390: Ruff (W293)
cortex/first_run_wizard.py:390:1: W293 Blank line contains whitespace
tests/test_ask.py
[failure] 267-267: Ruff (W293)
tests/test_ask.py:267:1: W293 Blank line contains whitespace
[failure] 261-261: Ruff (W293)
tests/test_ask.py:261:1: W293 Blank line contains whitespace
[failure] 256-256: Ruff (W293)
tests/test_ask.py:256:1: W293 Blank line contains whitespace
[failure] 253-253: Ruff (W293)
tests/test_ask.py:253:1: W293 Blank line contains whitespace
tests/test_cli.py
[failure] 68-68: Ruff (W293)
tests/test_cli.py:68:1: W293 Blank line contains whitespace
🪛 GitHub Check: Lint

Same W293 findings as the lint check above: cortex/first_run_wizard.py:390; tests/test_ask.py:253, 256, 261, 267; tests/test_cli.py:68.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Agent
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (6)
cortex/first_run_wizard.py (1)
374-407: LGTM! User-driven model selection aligns with PR objectives. The implementation successfully replaces the hardcoded "llama3.2" default with a user-friendly selection menu. The chosen model is properly stored in `config["ollama_model"]` for later use by the dynamic model resolution logic in `cortex/ask.py`.

tests/test_ollama_integration.py (3)
27-46: LGTM! Robust helper for dynamic model detection. The `get_available_ollama_model()` helper elegantly solves the problem of tests assuming specific models are installed. It parses `ollama list` output and returns the first available model, allowing tests to work with any Ollama installation.
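As a rough illustration, here is a minimal sketch of what such a helper can look like (the shape is inferred from this description; the repository's actual implementation may differ):

```python
# Hypothetical sketch: shell out to `ollama list` and parse its table output
# (the first line is the header: NAME  ID  SIZE  MODIFIED; models follow).
import subprocess


def get_available_ollama_model() -> str | None:
    """Return the first locally installed Ollama model, or None."""
    try:
        result = subprocess.run(
            ["ollama", "list"], capture_output=True, text=True, timeout=10
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return None  # Ollama is not installed or not responding
    for line in result.stdout.strip().splitlines()[1:]:  # skip header row
        parts = line.split()
        if parts:
            return parts[0]  # model name, e.g. "llama3.2:latest"
    return None
```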
49-53: LGTM! Enhanced skip condition prevents false test failures. The updated `pytestmark` now checks both Ollama installation and model availability, ensuring tests skip gracefully when requirements aren't met rather than failing.
95-111: LGTM! Test respects environment-driven model selection. The test now uses the `OLLAMA_MODEL` environment variable first, then falls back to `get_available_ollama_model()`, matching the priority established in `cortex/ask.py`. This ensures tests verify the actual production behavior.

cortex/ask.py (2)
180-202: LGTM! Clean implementation of dynamic Ollama model resolution. The `_get_ollama_model()` method correctly implements the three-tier priority described in the PR objectives:

1. `OLLAMA_MODEL` environment variable
2. `~/.cortex/config.json` → `ollama_model` field
3. Fallback to `"llama3.2"`

The silent exception handling (lines 198-199) is appropriate for this fallback scenario, ensuring config read errors don't break the user experience.

Based on PR objectives and coding guidelines.
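To make the resolution order concrete, a minimal sketch of the three-tier lookup (shown as a standalone function for brevity; the method name, config path, and silent fallback come from this review, while the exact error handling in cortex/ask.py may differ):

```python
import json
import os
from pathlib import Path


def get_ollama_model() -> str:
    """Resolve the Ollama model: env var, then config file, then default."""
    # 1. OLLAMA_MODEL environment variable takes precedence.
    env_model = os.environ.get("OLLAMA_MODEL")
    if env_model:
        return env_model
    # 2. Otherwise consult ~/.cortex/config.json -> "ollama_model".
    config_file = Path.home() / ".cortex" / "config.json"
    try:
        config = json.loads(config_file.read_text())
        if config.get("ollama_model"):
            return config["ollama_model"]
    except Exception:
        pass  # config read errors silently fall through, per the review
    # 3. Built-in default.
    return "llama3.2"
```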
169-178: LGTM! Proper delegation to dynamic model resolver. The change from hardcoded `"llama3.2"` to `self._get_ollama_model()` enables the flexible model selection that is the core objective of this PR.
Pull request overview
This PR removes the hardcoded llama3.2 model from the Ollama integration, enabling users to configure any Ollama model via environment variables or configuration files. This resolves issue #383 where users were unable to specify alternative models like mistral, phi3, or llama3.1:8b.
Key Changes
- Added configurable Ollama model selection with priority: `OLLAMA_MODEL` environment variable → `~/.cortex/config.json` → default `llama3.2`
- Enhanced first-run wizard with 5 model options (llama3.2, llama3.2:1b, mistral, phi3, custom)
- Updated tests to auto-detect available Ollama models instead of assuming llama3.2
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 10 comments.
Show a summary per file
| File | Description |
|---|---|
| cortex/ask.py | Added _get_ollama_model() method to read model from environment variable or config file with fallback to llama3.2 |
| cortex/first_run_wizard.py | Updated wizard to offer 5 model choices instead of forcing llama3.2 installation |
| tests/test_ask.py | Modified test to verify model selection from environment variable and config |
| tests/test_ollama_integration.py | Added get_available_ollama_model() helper and updated all tests to use detected models |
| tests/test_cli.py | Updated test expectation to reflect that Ollama fallback allows install to succeed without API key |
Actionable comments posted: 1
♻️ Duplicate comments (2)
cortex/first_run_wizard.py (1)
390-390: Fix trailing whitespace on line 390. Line 390 contains trailing whitespace, causing pipeline failures.
tests/test_ask.py (1)
253-253: Fix trailing whitespace on blank lines. Multiple blank lines contain trailing whitespace (lines 253, 256, 261, 267), causing linting failures. These were flagged in the previous review and need to be addressed.
As per pipeline failures and coding guidelines.
Also applies to: 256-256, 261-261, 267-267
🧹 Nitpick comments (2)
tests/test_ask.py (2)
252-252: Remove redundant import. The `os` module is already imported at the top of the file (line 3), so this import statement is unnecessary.

🔎 Proposed fix

```diff
 def test_default_model_ollama(self):
     """Test default model for Ollama."""
     # Test with environment variable
-    import os

     # Save and clear any existing OLLAMA_MODEL
     original_model = os.environ.get("OLLAMA_MODEL")
```

As per PEP 8 coding guidelines.
268-272: Consider verifying the default fallback value. The test correctly validates that a model is configured, but it doesn't verify the specific fallback default. According to the PR objectives, when neither `OLLAMA_MODEL` nor the config file specifies a model, it should default to `"llama3.2"`. Consider adding a test case that explicitly verifies this fallback behavior, possibly by temporarily moving/mocking the config file to ensure the default is tested in isolation.

💡 Example approach

You could add an additional test or expand this one to verify the default:

```python
# Test with mocked config that returns None for ollama_model
with patch('cortex.ask.load_config', return_value={}):
    handler3 = AskHandler(api_key="test", provider="ollama")
    self.assertEqual(handler3.model, "llama3.2")
```

This would ensure the fallback default is properly tested per the PR objectives.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cortex/first_run_wizard.py
- tests/test_ask.py
- tests/test_cli.py
- tests/test_ollama_integration.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/test_cli.py
- tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/first_run_wizard.py
- tests/test_ask.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_ask.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (1)
cortex/first_run_wizard.py (1)
374-409: Good implementation of dynamic model selection. The model selection menu provides a good user experience with multiple preset options and custom model input. The implementation correctly:

- Offers five sensible model choices with size information
- Allows custom model names
- Uses the selected model throughout (pull command, error messages)
- Stores the selection in `config["ollama_model"]` for dynamic resolution

This aligns well with the PR objectives to remove hardcoded "llama3.2" references.
@Suyashd999 I have verified the changes made by this PR, and it fulfills the requirements of issue #383.
@sujay-d07 please fix the conflicts
…lines in various test files and the first_run_wizard module.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
CLA Verification Failed

The following contributors have not signed the Contributor License Agreement:
How to Sign
Verified Signers

This check runs automatically. Maintainers can update
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/test_cli.py (1)
62-82: Critical: Unresolved merge conflict markers. The file contains Git merge conflict markers (`<<<<<<< HEAD`, `=======`, `>>>>>>>`), making it syntactically invalid Python. This is causing all pipeline failures and preventing tests from running.

You need to manually resolve this conflict by:

- Reviewing both versions of `test_install_no_api_key` (lines 63-70 vs 72-81)
- Deciding which implementation to keep or merging them
- Removing all conflict markers (`<<<<<<<`, `=======`, `>>>>>>>`)

Based on the PR objectives, the new version (lines 72-81) that mocks `CommandInterpreter` and expects success with Ollama appears to be the intended implementation.

🔎 Recommended resolution
Remove the HEAD version and conflict markers, keeping only the new implementation:
```diff
     @patch.dict(os.environ, {}, clear=True)
-<<<<<<< HEAD
-    def test_install_no_api_key(self):
-        # When no API key is set, the CLI falls back to Ollama.
-        # If Ollama is running, this should succeed. If not, it should fail.
-        # We'll mock Ollama to be unavailable to test the failure case.
-        with patch("cortex.llm.interpreter.CommandInterpreter.parse") as mock_parse:
-            mock_parse.side_effect = RuntimeError("Ollama not available")
-            result = self.cli.install("docker")
-            self.assertEqual(result, 1)
-=======
     @patch("cortex.cli.CommandInterpreter")
     def test_install_no_api_key(self, mock_interpreter_class):
         # Should work with Ollama (no API key needed)
         mock_interpreter = Mock()
         mock_interpreter.parse.return_value = ["apt update", "apt install docker"]
         mock_interpreter_class.return_value = mock_interpreter

         result = self.cli.install("docker")

         # Should succeed with Ollama as fallback provider
         self.assertEqual(result, 0)
->>>>>>> a2e81e1 (Removed the hardcoded Ollama Models and now works with any Ollama Model)
```
♻️ Duplicate comments (1)
cortex/first_run_wizard.py (1)
389-394: Critical: KeyError on invalid model choice. Line 394 will raise `KeyError` if the user enters an invalid choice (e.g., "6", "abc", or any value not in "1"-"5"). The code only checks for "5" at line 391, but doesn't validate "1"-"4" before the dictionary lookup.

🔎 Proposed fix
```diff
 choice = self._prompt("\nEnter choice [1]: ", default="1")

 if choice == "5":
     model_name = self._prompt("Enter model name: ", default="llama3.2")
 else:
-    model_name = model_choices[choice]
+    model_name = model_choices.get(choice, "llama3.2")
```

Alternatively, add input validation:
```diff
 choice = self._prompt("\nEnter choice [1]: ", default="1")
+
+# Validate choice
+while choice not in model_choices and choice != "5":
+    print("Invalid choice. Please enter a number between 1 and 5.")
+    choice = self._prompt("\nEnter choice [1]: ", default="1")

 if choice == "5":
     model_name = self._prompt("Enter model name: ", default="llama3.2")
 else:
     model_name = model_choices[choice]
```
🧹 Nitpick comments (1)
tests/test_ask.py (1)
245-257: Consider using `@patch.dict` for cleaner test isolation. The test manually saves and restores the `OLLAMA_MODEL` environment variable. While this works, using `@patch.dict(os.environ, ...)` provides automatic cleanup even if the test fails and is more consistent with other tests in the file (e.g., `test_install_no_api_key` in `test_cli.py`).

🔎 Proposed refactor
```diff
-    def test_default_model_ollama(self):
+    @patch.dict(os.environ, {"OLLAMA_MODEL": "test-model"})
+    def test_default_model_ollama(self):
         """Test default model for Ollama."""
-        # Test with environment variable
-
-        # Save and clear any existing OLLAMA_MODEL
-        original_model = os.environ.get("OLLAMA_MODEL")
-
-        # Test with custom env variable
-        os.environ["OLLAMA_MODEL"] = "test-model"
         handler = AskHandler(api_key="test", provider="ollama")
         self.assertEqual(handler.model, "test-model")
-
-        # Clean up
-        if original_model is not None:
-            os.environ["OLLAMA_MODEL"] = original_model
-        else:
-            os.environ.pop("OLLAMA_MODEL", None)
```

Note: You'll need to test the default behavior separately, potentially in a new test method without the env var set.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cortex/ask.py
- cortex/first_run_wizard.py
- tests/test_ask.py
- tests/test_cli.py
- tests/test_ollama_integration.py
🚧 Files skipped from review as they are similar to previous changes (2)
- cortex/ask.py
- tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- tests/test_ask.py
- cortex/first_run_wizard.py
- tests/test_cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
- tests/test_ask.py
- tests/test_cli.py
🧬 Code graph analysis (1)
tests/test_cli.py (1)
cortex/cli.py (1)
install (320-566)
🪛 GitHub Actions: Cortex Automation
tests/test_cli.py
[error] 62-62: IndentationError: unexpected unindent. Possible merge conflict markers (e.g., '<<<<<<< HEAD') in the file.
🪛 GitHub Check: lint
tests/test_cli.py
[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:7: invalid-syntax: Expected a statement
[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:5: invalid-syntax: Expected a statement
[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:3: invalid-syntax: Expected a statement
[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:1: invalid-syntax: Expected class, function definition or async function definition after decorator
[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:7: invalid-syntax: Expected a statement
[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:5: invalid-syntax: Expected a statement
[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:3: invalid-syntax: Expected a statement
[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:1: invalid-syntax: Expected a statement
🪛 GitHub Check: Lint

Same invalid-syntax findings as the lint check above: tests/test_cli.py lines 62 and 71.
```python
with (
    tempfile.TemporaryDirectory() as tmpdir,
    patch("os.path.expanduser", return_value=tmpdir),
):
```
Critical: Wrong patch target breaks test isolation.
Line 263 patches os.path.expanduser, but the implementation in cortex/ask.py uses Path.home() to locate the config file:
```python
config_file = Path.home() / ".cortex" / "config.json"
```

Patching `os.path.expanduser` won't affect `Path.home()`, so the test will still read from the real `~/.cortex/config.json` instead of the temporary directory, making the test non-deterministic and environment-dependent.
🔎 Proposed fix
```diff
+from pathlib import Path
+
 # Test deterministic default behavior when no env var or config file exists.
 # Point the home directory to a temporary location without ~/.cortex/config.json
 with (
     tempfile.TemporaryDirectory() as tmpdir,
-    patch("os.path.expanduser", return_value=tmpdir),
+    patch("pathlib.Path.home", return_value=Path(tmpdir)),
 ):
     handler2 = AskHandler(api_key="test", provider="ollama")
     # When no env var and no config file exist, AskHandler should use its built-in default.
     self.assertEqual(handler2.model, "llama3.2")
```

🤖 Prompt for AI Agents
In tests/test_ask.py around lines 261 to 264, the test currently patches
os.path.expanduser but cortex/ask.py uses Path.home(), so the test still reads
the real ~/.cortex/config.json; change the patch to target pathlib.Path.home
(e.g. patch("pathlib.Path.home", return_value=Path(tmpdir)) or
patch.object(Path, "home", return_value=Path(tmpdir))) and ensure tmpdir is
converted to a Path before returning so the code under test resolves the config
file inside the temporary directory.
…pdate dependency check formatting; enhance CLI tests for Ollama fallback behavior
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/scripts/cla_check.py (1)
1-389: Inconsistency: File changes unrelated to PR objectives. This PR is titled "Removed the hardcoded Ollama Models" and the objectives describe changes to Ollama model configuration, environment variable resolution, and test updates. However, the file under review (`.github/scripts/cla_check.py`) is a CLA enforcement script with only formatting and type-hint improvements, completely unrelated to Ollama.

Best practice: separate unrelated changes into distinct PRs to maintain clear scope, simplify reviews, and keep git history clean.
🧹 Nitpick comments (1)
tests/test_cli.py (1)
62-71: Test correctly validates Ollama fallback behavior. The test properly mocks `CommandInterpreter` and verifies that installation succeeds with Ollama when no API key is available. The mocking pattern is consistent with other similar tests in this file.

💡 Optional: Consider adding assertions to verify provider selection
To make the test more explicit about Ollama fallback behavior, you could add assertions to verify the provider and API key:
```python
@patch.dict(os.environ, {}, clear=True)
@patch("cortex.cli.CommandInterpreter")
def test_install_no_api_key(self, mock_interpreter_class):
    # Should work with Ollama (no API key needed)
    mock_interpreter = Mock()
    mock_interpreter.parse.return_value = ["apt update", "apt install docker"]
    mock_interpreter_class.return_value = mock_interpreter

    result = self.cli.install("docker")

    # Should succeed with Ollama as fallback provider
    self.assertEqual(result, 0)

    # Verify Ollama was used as the provider
    mock_interpreter_class.assert_called_once()
    call_kwargs = mock_interpreter_class.call_args[1]
    self.assertEqual(call_kwargs.get("provider"), "ollama")
    self.assertEqual(call_kwargs.get("api_key"), "ollama-local")
```

This would make the Ollama fallback behavior more explicit in the test verification.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- .github/scripts/cla_check.py
- cortex/dependency_check.py
- cortex/llm/interpreter.py
- tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/llm/interpreter.py
- tests/test_cli.py
- cortex/dependency_check.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (6)
cortex/llm/interpreter.py (1)
114-117: LGTM! Clean formatting improvement. The multi-line argument formatting with trailing comma follows PEP 8 style guidelines and improves readability without any semantic changes.
cortex/dependency_check.py (1)
46-46: LGTM! Consistent formatting. The comma-space separator for the package list improves readability in the error message.
.github/scripts/cla_check.py (4)
137-137: LGTM: Formatting improvements enhance readability. The reformatting of API calls and JSON parameters to single lines improves consistency and readability without changing behavior.
Also applies to: 242-242, 260-260, 265-265, 291-291, 304-304, 312-312
339-348: LGTM: Multi-line bot list improves maintainability. Reorganizing the bot patterns list to multi-line format enhances readability and makes future diffs cleaner when adding or removing bot patterns.
11-11: LGTM: Import grouping follows PEP 8. The blank line correctly separates standard library imports from third-party imports, adhering to PEP 8 style guidelines.
89-89: No action needed. The project requires Python 3.10+ as the minimum supported version, which fully supports the union type syntax (`str | None`) used in the type hints. The `from __future__ import annotations` workaround is unnecessary.
recheck



Related Issue
Closes #383
Summary
Removed hardcoded llama3.2 model from Ollama integration. Users can now configure any Ollama model via environment variables or config file.
Changes
Old Behaviour:
New Behaviour:
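For illustration, a hedged sketch of the two new configuration paths (the model names are examples; the environment variable, config path, and `ollama_model` key come from this PR):

```python
# Option 1: select any model via the environment (highest priority).
import os

os.environ["OLLAMA_MODEL"] = "mistral"  # example model name

# Option 2: persist a choice in ~/.cortex/config.json (used when no env var is set).
import json
from pathlib import Path

config_path = Path.home() / ".cortex" / "config.json"
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps({"ollama_model": "phi3"}, indent=2))

# With neither set, Cortex falls back to the built-in default "llama3.2".
```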
Summary by CodeRabbit
New Features
Bug Fixes / Improvements
Tests
Chores